Search Results: "tachi"

24 June 2010

Matthew Garrett: Joojoo

I've had the opportunity to look into the Joojoo tablet recently. It's an interesting device in various ways, ranging from the screen being connected upside down and everything having to be rotated before display, to the ACPI implementation that's so generic it has no support for actually attaching most embedded controller interrupts to ACPI devices and so relies on a hacked kernel that exposes individual interrupts as ACPI events that are parsed in userspace, to the ChangeOrientation binary that's responsible for switching between landscape and portrait modes containing gems like ps aux | grep fgplayer | grep -v grep and containing references to org.freedesktop.PandaSystem, a somewhat gratuitous namespace grab. Hardware-wise it seems to be little more than battery, generic nvidia reference design board and touchscreen with an accelerometer and LED glued to the chipset's GPIO lines. The entire impression is one of an ambitious project not backed up by the level of technical expertise required to get things done properly. Frankly, I think Michael Arrington came out of this rather better than he could have done - behind the reasonably attractive UI, the entire device is pretty much held together by string and a following wind.

Of course, releasing shoddily put together technology isn't generally illegal, and from that point of view Fusion Garage are no worse than the vendors of a number of products I've had the misfortune to actually spend money on. But they're distributing Linux (stock Ubuntu with some additional packages and a modified kernel) without any source or an offer to provide source. I emailed them last week and got the following reply:

Dear Sir,

we are still actively making changes to the joojoo software. We will make
the source release available once we feel we are ready to do so and also
having the resources to get this sorted out and organized for publication.
We seek your kind understanding on our position and appreciate your
patience on this. Thank you.

Best Regards
joojoo Support Team


Strong work, Fusion Garage. Hardware and software may not be your strong points, but you're managing copyright infringement with the best of them.

14 June 2010

Russell Coker: Should Passwords Expire?

It's widely regarded that passwords should be changed regularly. The Australian government last week declared National Cyber Security Awareness Week [1] and has published a list of tips for online security which includes "Get a stronger password and change it at least twice a year".

Can a Password be Semi-Public?

Generally I think of a password as being either secret or broken. If a password is known to someone other than the sysadmin and the user who is authorised to use the account in question then you have potentially already lost all your secret data. If a password is disclosed to an unauthorised person on one occasion then merely changing the password is not going to do any good unless the root cause is addressed, otherwise another unauthorised person will probably get the password at some future time. Hitachi has published a good document covering many issues related to password management [2]. I think it does a reasonable job of making sysadmins aware of some of the issues, but there are some things I disagree with. I think it should be used as a list of issues to consider rather than a list of answers. The Hitachi document lists a number of ways that passwords may be compromised and suggests changing them every 60 to 90 days to limit the use of stolen passwords. This seems to imply that a password is something whose value slowly degrades over time as it is increasingly exposed. I think that the right thing to do is to change a password if you suspect that it has been compromised. There's not much benefit in having a password if it's going to be known by unauthorised people for 89 days before being changed! Fundamentally a password is something whose value can rapidly drop to zero without warning. It doesn't wear out.

Why are terms such as Three Months used for Maximum Password Ages?

The Hitachi document gives some calculations on the probability of a brute-force attack succeeding against a random password with 90 days of attacking at a rate of 100 attempts per second [2]. I think that if a service is run by someone who wouldn't notice the load of 100 attempts per second then you have bigger security problems than the possibility of passwords being subject to brute-force attacks. Also it's not uncommon to have policies that lock accounts after as few as three failed login attempts. Rumor has it that in the early days of computing, when the hashed password data was world readable, someone calculated that more than 3 months of CPU time on a big computer would be needed to obtain a password by brute force. But since then the power of individual CPUs has increased dramatically, computers have become cheap enough that anyone can easily gain legal access to dozens of systems and illegal access to a million systems, and it has become a design feature in every OS that hashed passwords are not readable by general users. So the limiting factor is to what degree the server restricts the frequency of password guesses. I don't think that specifying the minimum password length and maximum password age based on the fraction of the key space that could be subject to a brute-force attack makes sense. I don't think that any attempt to make an industry-wide standard for the frequency of password changes (as the government is trying to do) makes sense.
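To put those numbers in perspective, here is a minimal shell sketch of the arithmetic (the 100 guesses per second and the 90 days come from the Hitachi document; the 8-character password drawn from a 62-symbol alphabet is my own assumed example, not something the document specifies):

awk 'BEGIN {
  guesses  = 100 * 60 * 60 * 24 * 90      # 90 days at 100 guesses/second = 777,600,000
  keyspace = 62 ^ 8                       # random 8-character a-zA-Z0-9 password
  printf "%d guesses cover %.6f%% of the keyspace\n", guesses, 100 * guesses / keyspace
}'

Even at a guess rate that no sane server would fail to notice, well under a thousandth of a percent of the keyspace gets covered, which is exactly the point above: online brute force is not what regular password changes protect against.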
Can there be a Delay between a Password being Compromised and being Used by an Attacker?

Hypothetically speaking, if a password was likely to be compromised (e.g. by having the paper it was written on lost or stored insecurely) for some time before an attacker exploited it, then changing the password during that time period could solve the problem. For example, when a company moves office there is the possibility of notepaper with passwords being lost. So if the sysadmin caused every user password to expire at the time of the move then a hostile party would be unable to gain access. Another possibility is the theft of backup tapes that contain the list of unencrypted passwords. If users change their passwords every three months then the theft of some four month old backup tapes will be less of a problem. Another possibility concerns the resale of old computers, phones, and other devices that may contain passwords. A reasonably intelligent user won't sell their old hardware as soon as the replacement device arrives; they will want to use the new device for some time to ensure that it works correctly. If passwords expire during this trial period with the new device then passwords stored in the old device won't have any value. The down-side to this idea is that people probably sell their old gear fairly quickly, and making passwords expire every two weeks would not be accepted well by the users. It seems to me that having bulk password changes (all passwords for one user or for one company) based on circumstances that lead to potential insecurity would do more good than changing passwords on a fixed schedule.

How are Passwords typically Compromised?

Dinei Florêncio and Cormac Herley of Microsoft Research and Baris Coskun of Brooklyn Polytechnic University wrote a paper titled "Do Strong Web Passwords Accomplish Anything?" [3] which discusses this issue. The first thing that they note is that nowadays passwords are most commonly compromised by phishing and keylogging. In those cases passwords are typically used shortly after they are stolen and the strength of a password never does any good. That paper suggests that banks should use stronger user-names rather than stronger passwords to combat the threat of bulk brute-force attacks.

Can a Password Last Forever?

If a password is entered in a secure manner, authenticated by a secure server, and all network links are encrypted or physically protected then there should never be a need to change it. Of course nothing is perfectly secure, so for some things with minimal short-term value or which can be used without anyone noticing there is a benefit in changing the password. But in the case of Internet banking, if a hostile party gets your login details then you will probably know about it in a few days when the bank calls you about the unusual transactions from foreign countries, long before a 90 day password change schedule would have done any good. Maybe one of the issues determining whether a password should be changed regularly is whether an attacker could use long-term read-only access to gain some benefit. Being able to read all the email someone received for a year could be a significant benefit if that person was a public figure, and there's usually no way for an ISP customer to know that someone else is downloading all their mail via POP or IMAP.

Should a Password be the only Authentication Method?

It is generally agreed that an authentication method should ideally involve something you have plus something you know.
That means a password plus a physical device such as a smart card, a token with a changing sequential password, or a key such as a Yubikey [4]. If the physical device can't be cloned (through some combination of technical difficulty and physical access control) then it significantly improves security. When a physical device is used, the purpose of the password is merely to stop someone who steals the physical device from immediately exploiting everything; the password only has to be strong enough to keep the accounts secure until a new token can be issued. The combination of something you have and something you know is very strong. Even a physical token stored on the desk next to the PC that is used for logging in provides a significant benefit: an attacker then needs to break in to the house and can't sniff the password by compromising the PC remotely.

Conclusion

In all aspects of security you need to consider what threats you face. If an attacker is likely to launch an immediate noisy attack (such as transferring the maximum funds out of an Internet banking account) then changing the password regularly won't do any good. If a subtle long-term attack is expected then changing the password can do a lot of good, but a physical token is the ideal if the account is valuable enough. But to put things into perspective, it's technically possible to use a mobile phone camera at close range (or an SLR with a big lens at long range) to take a photo of keys that allows reproducing them. But this hasn't stopped people from carrying their house keys in an obvious manner that permits photography or leaving them on their desk at work. Also I've never heard of anyone routinely changing the door locks in case a hostile party might have got a key, although I'm sure that such practices are common in highly secure locations. Few people even take their house keys off the key-ring when they have their car serviced!

Related Posts:
Defense in Depth and Sudo - when using sudo can increase security and when it can't.
Logging Shell Commands - how to log what the sysadmin does and what benefits that provides; it doesn't help if the sysadmin is hostile.
Logging in as Root - should you log in directly as root?

1 April 2010

Adrian von Bidder: SCO: Ready to give up?

The answer is probably no way! I like the way Jon Corbet put it (article will be free in a few weeks):
The SCO affair is kind of like a bad zombie movie; the plot is implausible, the acting is horrible, and, even though you know the good guys must win in the end, that obnoxious zombie just keeps coming back and ruining the party.
Realistically, what will happen? The judge in the Chapter 11 case has so far been quite interested in helping SCO succeed, so I guess there will be an appeal on this. The contract claims in the cases surrounding this are probably not very relevant, so I'm not sure how much press they'll get (although I'm sure Pam will continue to cover the issue, though I think the Apple vs. Hitachi patent case should be the one at the center of attention right now.) And, of course, there's the Canonical vs. SPI lawsuit coming up. I find it very difficult to tell in what direction that will be decided, but I hope it won't take as long as the SCO case.

9 February 2010

Martin-Éric Racine: OpenOffice's style editing dialogs suck!

Working on some document today, it occurred to me, once again, that OpenOffice's method for designing and applying document styling totally sucks! Granted, this was not the first time that I came to this conclusion but, today, I've come to realize that OpenOffice's paradigms constantly make me waste time trying to form a mental image of how every style element is supposed to relate to the others, without ever having the full picture available within a single, easy-to-read document. Also, there is a complete lack of consistency in how style elements work. Some want to be defined in millimeters, others in points, and others still in number of lines. What a mess! In short: to become remotely usable, OpenOffice needs to approach document styling via the "HTML document with a separate CSS style sheet" paradigm. In other words, I need to be able to edit styles globally, as a group and separately from the document content itself, rather than having to click my way through a multitude of dialogs for each and every type of text element. To compare this with web design: there, I can focus on the actual content, formatted around semantic text elements (headers, paragraphs, block quotes, etc.) and then decide on the presentation styling as a separate global process by attaching a CSS style sheet, in which the relation between each type of text element and how it will be displayed is crystal clear, because it's handled as a unified style editing process. I think that this is one area in which Free Software could innovate in a positive way, by distancing itself from the Redmondesque practices of Microsoft Word, from which OpenOffice borrows too much. How about having a proper Style Editor application (similar to a CSS editor) within the OpenOffice suite, while OpenOffice Writer itself would only be allowed to load the style sheets produced by it and to apply them to semantic text elements?

7 February 2010

Ben Armstrong: Eee PC care at 2 years

In November 2007, we bought two of the first Eee PC netbooks available in North America: the model 4G in pearl white, one for me, and one for my wife. My plan was to start the Debian Eee PC project and get Debian working on them both which, thanks to the work of everyone on the team, has been a great success. Fast forward to February 2010, and they are still serving us well. Here is a list of hardware enhancements and issues over the past two years of continuous use:

Battery life

Asus claims 3.5hrs for the model 4G, though I don't think we ever experienced anything better than 2.5 hrs with wifi in operation. Today, I'm down to something short of 2 hrs. If you consider the Lithium ion battery article on Wikipedia to be accurate, it is typical to permanently lose 20% capacity per year, so this sounds about right for the age of the battery.

Touchpad

The touchpad itself is fine. The buttons are not. They are hard to push and prone to become less responsive after a while, or to fail entirely. I now really have to bear down hard on my left button to make it work. Because the buttons are soldered to the motherboard, they would be hard to replace. Configuring the synaptics driver for tap-to-click is an acceptable workaround. Also, one of the first purchases we made was the Logitech VX Nano, one for each Eee, so most of the time now we use our mice, falling back to the touchpad only when using a mouse is inconvenient due to space constraints (e.g. on the bus).

Keyboard

After two years of continuous use, both of our keyboards needed replacing. Some of the keys had been pounded flat and needed surgery (rubber springs sliced out of a scavenged Acer netbook keyboard and grafted on) to make them usable again. And aesthetically, the keyboards have really suffered, with a good percentage of the markings on the keycaps partially or completely eroded off. Fortunately, for $3 plus $10 shipping each, new keyboards can be ordered from a Chinese e-Bay seller. But with the new keyboards, there is still the problem of excessive flex in the keyboard making some keys less responsive than others. I'm considering trying this hack to solve the problem.

Display

I have no complaints about the matte finish displays of either Eee. Both have stood up well. People tell me they don't care for a 7" display, but I find it bright, crisp and easy to get along with, so long as you maximize windows and increase font size as needed (e.g. for reading PDF books and magazines). When I really need the extra real estate, I just hook up an external monitor.

Fan

Some people complain about the noise of the fan from the otherwise silent 4G (due to the lack of a hard drive) and seek ways to limit the amount of time the fan runs. We've never had that problem. However, if any plastic bits break off inside the Eee, they will eventually find their way to your fan and get wedged. This has happened twice: once in my wife's Eee, and once in mine. Disassembly of the Eee to clear the blockage and get the fan running again was relatively painless both times.

Plastic bits

About those plastic bits: in my wife's case, I think the bit was some extruded plastic from the molding that had broken off. In my case, it was a deteriorating plastic post which was holding the hinge in place. More about this later.

Sound

Unfortunately, the integrated sound seems to be particularly sensitive to heat. During both fan failures, sound cut out.
On my wife's system, it never fully recovered, and now only works on alternate Tuesdays, whereas on mine, after I unwedged the fan and brought the heat down, sound has functioned properly ever since. (Update: see my comments below on power adapters for an alternate theory about what killed sound.) We are now waiting for the delivery of a cheap ($1.42 including shipping, again from China) 3D Sound USB device which is reported to work on Linux. While some users complain about hum with this device, we're cautiously optimistic that on the Eee it won't be an issue. If it is, oh well, a buck and a half is a small gamble.

Display hinge

The plastic posts that hold the anchors for the machine screws for the hinge are fragile. Three out of four of these posts in my Eee have shattered, leaving the left hinge to float freely. For the past two days I was distressed about how I could fix this. Based on some helpful suggestions from #eeepc @ irc.freenode.net, I had worked out a plan to rebuild one of the posts with two-part epoxy, sand it and the fractured stub of the post, and use cyanoacrylate glue to affix it to the case. It sounded like fiddly work for which I didn't hold much hope of success. But this morning, I realized that the plastic in the bottom of the case was still sound, and that I could drop a screw into that hole and, with pliers, line up the anchor on the other side. This appears to work! I'll handle that hinge with care, but it should do me until I eventually repurpose the machine to some daily use that is less stressful on the hinge than daily travel on the bus.

SSD and SDHC

Some people were initially worried that the limited write life of an SSD meant that you needed to take special measures to avoid its premature death, which I have always regarded as a myth. Our 4G SSDs each have no swap space, and ext3 filesystems. At the two year point, there are no problems. I expect our SSDs to outlast the other components. We did, however, eventually find the 4G to be a bit on the small side. We now each have Kingston 4G micro-SDHC cards for a bit of extra capacity for large media files and for some extra space for the apt cache. I recently priced the Lexar version of this card at $19 for a two-pack at Wal-mart, and these seem to work in the model 4G just as well (which seems to be an issue with this model: not all SDHC cards work).

Memory

We found the 512M that came with the 4G somewhat constraining. Without any swap space (so we could maximize the space available on the small SSD), we decided our systems really needed a memory upgrade. At the time, 2G didn't seem overly expensive (though I no longer remember the exact price) so we splurged on 2G for each system, even though that was probably more than we needed.

Power adapter

When I first wrote this article, I forgot to mention the power adapter. The model 4G uses an unusual 9.5V power adapter for which it is hard to find a generic replacement. This unit is prone to fail in two places: in rare cases (one out of the 6 units that I have in some way assisted with: 3 of them mine, 3 belonging to others), it will break at the wall plug end. The plug swivels into the unit for easy storage, but this adds a point of failure. More commonly (in 3 out of the six units), the wires at the netbook plug end will fray due to stress on the non-angled cord, leading either to no connection, or to a short.
In fact, since in the case of my wife's adapter the netbook end of the cable shorted, making the cable heat up and the system spontaneously shut down after a little while, we can't say for sure whether the overheating of the Eee took out the integrated sound, or whether this short did. On the one hand, I know for certain my own system, which never suffered a short, has responded to overheating by making the sound flaky, but it has never taken out sound completely. On the other hand, shorting the power cable can't be very good for the Eee, so who knows, maybe that's what ultimately killed my wife's Eee's sound. This is where it helps to have soldering skills, or a friend who can do the repair for you. In my case, it was the latter. While I was waiting for a replacement adapter I had ordered, we also found on eeeuser.com part#s for replacement plugs and sleeves from Digikey, total cost $20 including shipping, 10 pieces each. Cutting off the bad plugs on three units and attaching the new ones was a quick, straightforward operation, and succeeded in two out of three cases. Having carefully checked the failed repair, we deduced that the failure in that unit was at the wall end, and we judged there was nothing further we could do to try to save it. Even the unit that had previously shorted was returned to perfect health and has been in operation trouble-free ever since.

Conclusion

After over two years of continuous use, the model 4G has held up well for a $400 system. We definitely feel we have received our money's worth of value over that time. All of the problems we've experienced so far have been fixable at very little expense, and we expect the machines to last at least another year before we seriously consider replacing them. In the next several months, I plan to order a high capacity 8-cell 10400mAh battery for my own system so that I can enjoy a 5-6 hr run time. The purchase will be roughly $55 including shipping, the most expensive purchase for my system to date, but still well worth it to extend the 4G's lifespan for another year or more.

1 December 2009

Dirk Eddelbuettel: Updated slides for 'Introduction to HPC with R' (now with correct URLs)

This is an updated version of yesterday's post with corrected URLs -- by copy-and-pasting I had still referenced the previous slides from UseR! 2009 in Rennes instead of last Friday's slides from the ISM presentation in Tokyo. The presentations page had the correct URLs, and this has been corrected below for this re-post. My apologies! As mentioned yesterday, I spent a few days last week in Japan as I had an opportunity to present the Introduction to High-Performance Computing with R tutorial at the Institute for Statistical Mathematics in Tachikawa near Tokyo, thanks to an invitation by Junji Nakano. An updated version of the presentation slides (with a few typos corrected) is now available, as is a 2-up handout version. Compared to previous versions, and reflecting the fact that this was the 'all-day variant' of almost five hours of lectures, the following changes were made: Comments and suggestions are, as always, appreciated.

Dirk Eddelbuettel: Updated slides for 'Introduction to HPC with R'

As mentioned yesterday, I spent a few days last week in Japan as I had an opportunity to present the Introduction to High-Performance Computing with R tutorial at the Institute for Statistical Mathematics in Tachikawa near Tokyo, thanks to an invitation by Junji Nakano. An updated version of the presentation slides (with a few typos corrected) is now available, as is a 2-up handout version. Compared to previous versions, and reflecting the fact that this was the 'all-day variant' of almost five hours of lectures, the following changes were made: Comments and suggestions are, as always, appreciated.

30 November 2009

Dirk Eddelbuettel: Back from Tokyo

Just got back from Tokyo a few hours ago: I had an opportunity to give my 'Introduction to High-Performance Computing with R' tutorial / lectures (for which previous slides can be found here). This was an all-day talk at the Institute for Statistical Mathematics at their new site in Tachikawa in the greater Tokyo area, thanks to an invitation by Junji Nakano. Lisa and I turned this into a brief one-week trip to Kyoto and Tokyo, and we had a truly wonderful time on what was our first visit to Japan. I should blog some more about it, but now I will give in to the jet lag and catch up on some sleep...

25 October 2009

Julian Andres Klode: I'm back


I returned to Germany from my vacation in Greece yesterday, and I just installed my new hard drive into my laptop. The old Hitachi hard drive had developed some bad sectors after a very long usage time (compared to my other disks); we'll see how the new Samsung SpinPoint M7 will work. Another side benefit is the upgrade from 120GB to 500GB, which means I don't have to delete files during the next months. It is also much faster (hdparm -t showed 80MB/s, 2 times faster than the old one). I'm currently running the release candidate of Ubuntu 9.10 Karmic Koala on this system, but I expect to return to my full Debian unstable developer environment during the next week(s). Karmic seems to be pretty stable already, but I have experienced problems with PulseAudio and my PCM control getting set to +13dB, which is horrible. I will probably also pre-order a Nokia N900 soon, which will be my largest investment this year, but I really need a new mobile phone, camera and music player. And not to forget the ability to develop software for it, and to test my software on those less powerful devices (compared to my laptop). All in all, I'm back and soon ready to hack again.

23 October 2009

Adrian von Bidder: Color Calibration

A topic I've been trying to understand more about for some time is color calibration. I've used lprof to get a very rough starting point for my old CRT (sorely in need of replacement by now...). Since hardware colorimeters are not cheap and probably not many can be used with Linux (I've not really looked, to be honest), I've wondered if I couldn't use the one device that is (or should be) calibrated that I already have: my camera. With some kind of feedback loop like shoot the screen, analyze the image file, adjust settings, shoot again (and the same process for the printer and, possibly a bit more error prone since the freshly calibrated printer would be used to produce the template, the scanner), shouldn't it be possible to arrive at sane settings? The camera explicitly allows using the sRGB or Adobe RGB color model, so I'd think the colors should be more or less narrowly defined, at least when shooting raw or using manual white balance. (Another thought: I have a dual screen system. Can X even do this? And if X can do it, is it possible to tell Gimp to set this up? But this is just idle speculation. I've not really looked at Google's results yet, either.) Update: I hadn't really thought about ambient light. But the display emits light, so I'd shoot the screen in a darkened room, with the white balance of the camera set manually. OTOH, thanks Joël: if I can get a colorimeter for not much above 100$, it's not worth investing too much time. I had always thought these devices were much more expensive. divide_by_zero: Yes, I know CRTs are usually much better than LCDs, but OTOH my screen apparently starts to show its age: it will suddenly, and visibly, change brightness and color every few hours. I suspect the high voltage circuitry is not too stable anymore... And I won't buy another CRT, those things are just huge...

1 October 2009

Gunnar Wolf: Strange scanning on my server?

Humm... Has anybody else seen a pattern like this? I have been getting a flurry of root login attempts at my main server at the University since yesterday at 7:30AM (GMT-5). Now, of the machines I run in the 132.248.0.0/16 network (UNAM), only two listen to the world with ssh on port 22. And yes, it is a very large network, but I am only getting this pattern on one of them (they are on different subnets, quite far apart). They are all attempting to log in as root, with a frequency that varies wildly, but is consistently over three times a minute right now. This is a sample of what I get in my logs: [update] Logs omitted from the blog post, as the listing is too wide and breaks the display for most users. You can download the log file instead. Anyway, this comes from all over the world, and all the attempts are made as root (no attempts from unprivileged users). Of course, I have PermitRootLogin set to no in /etc/ssh/sshd_config, but I want to understand this as much as possible. Initially it struck me that most of the attempts appeared to come from Europe (quite atypical for the usual botnet distribution), so I passed my logs through:
#!/usr/bin/perl
use Geo::IP;
use IO::File;
use strict;
my ($geoip, $fh, %by_ip, %by_ctry);
$fh = IO::File->new('/tmp/sshd_log');
$geoip = Geo::IP->new(GEOIP_STANDARD);
while (my $lin = <$fh>) { next unless $lin =~ /rhost=(\S+)/; $by_ip{$1}++; }
print " Incidence by IP:\n", "Num Ctry IP\n", ('='x60), "\n";
for my $ip ( sort { $by_ip{$a} <=> $by_ip{$b} } keys %by_ip) {
    my $ctry = ($ip =~ /^[\d\.]+$/) ?
        $geoip->country_code_by_addr($ip) :
        $geoip->country_code_by_name($ip);
    $by_ctry{$ctry}++;
    printf "%3d %3s %s\n", $by_ip{$ip}, $ctry, $ip;
}
print " Incidence by country:\n", "Num Country\n", "============\n";
map { printf "%3d %s\n", $by_ctry{$_}, $_ }
    sort { $by_ctry{$b} <=> $by_ctry{$a} }
    keys(%by_ctry);
The top countries (where the number of attempts is ≥ 5) are:
104 CN
 78 US
 58 BR
 49 DE
 43 PL
 20 ES
 20 IN
 19 RU
 17 CO
 17 UA
 16 IT
 13 AR
 12 ZA
 10 CA
 10 CH
  8 GB
  8 AT
  8 JP
  8 FR
  7 KR
  7 HK
  7 PE
  7 ID
  6 PT
  5 CZ
  5 AU
  5 BE
  5 SE
  5 RO
  5 MX
I am attaching to this post the relevant log (filtering out all the information I could regarding legitimate users) as well as the full output. In case somebody has seen this kind of wormish, botnetish behaviour lately, please comment. [Update] I have tried getting some data regarding the attacking machines, running a simple nmap -O -vv against a random sample (five machines; I hope I am not being too aggressive in anybody's eyes). They all seem to be running some flavor of Linux (according to the OS fingerprinting), but the list of open ports varies wildly. I have seen the following:
Not shown: 979 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
23/tcp open telnet
111/tcp open rpcbind
135/tcp filtered msrpc
139/tcp filtered netbios-ssn
445/tcp filtered microsoft-ds
593/tcp filtered http-rpc-epmap
992/tcp open telnets
1025/tcp filtered NFS-or-IIS
1080/tcp filtered socks
1433/tcp filtered ms-sql-s
1434/tcp filtered ms-sql-m
2049/tcp open nfs
4242/tcp filtered unknown
4444/tcp filtered krb524
6346/tcp filtered gnutella
6881/tcp filtered bittorrent-tracker
8888/tcp filtered sun-answerbook
10000/tcp open snet-sensor-mgmt
45100/tcp filtered unknown
Device type: general purpose|WAP|PBX
Running (JUST GUESSING) : Linux 2.6.X|2.4.X (96%), (...)

Not shown: 993 filtered ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
80/tcp open http
443/tcp open https
444/tcp open snpp
3389/tcp open ms-term-serv
4125/tcp closed rww
Device type: general purpose|phone|WAP|router
Running (JUST GUESSING) : Linux 2.6.X (91%), (...)

Not shown: 994 filtered ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp closed smtp
53/tcp closed domain
80/tcp open http
113/tcp closed auth
443/tcp closed https
Device type: general purpose
Running (JUST GUESSING) : Linux 2.6.X (90%)
OS fingerprint not ideal because: Didn't receive UDP response. Please try again with -sSU
Aggressive OS guesses: Linux 2.6.15 - 2.6.26 (90%), Linux 2.6.23 (89%), (...)

Not shown: 982 closed ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
37/tcp open time
80/tcp open http
113/tcp open auth
135/tcp filtered msrpc
139/tcp filtered netbios-ssn
445/tcp filtered microsoft-ds
1025/tcp filtered NFS-or-IIS
1080/tcp filtered socks
1433/tcp filtered ms-sql-s
1434/tcp filtered ms-sql-m
4242/tcp filtered unknown
4444/tcp filtered krb524
6346/tcp filtered gnutella
6881/tcp filtered bittorrent-tracker
8888/tcp filtered sun-answerbook
45100/tcp filtered unknown
Device type: general purpose|WAP|broadband router
Running (JUST GUESSING) : Linux 2.6.X|2.4.X (95%), (...)

Not shown: 994 filtered ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
53/tcp open domain
80/tcp open http
110/tcp open pop3
3389/tcp open ms-term-serv
Warning: OSScan results may be unreliable because we could not find at least 1 open and 1 closed port
Device type: firewall|general purpose
Running: Linux 2.6.X
OS details: Smoothwall firewall (Linux 2.6.16.53), Linux 2.6.13 - 2.6.24, Linux 2.6.16
Of course, it strikes me that several among said machines seem to be Linuxes, but (appear to) run Microsoft services. Oh, and they also have P2P clients.
Attachments:
Results of parsing the logs (21.58 KB)
Relevant portion of the logs (1.05 MB)

13 September 2009

Brett Parker: Random Storage Idea...

So, I was reading some random mumblings on the interwebs, about the pigeon with a USB stick being a quicker method of data transfer in SA than their intertubes... then there was a thread on a mailing list discussing this, and someone mentioned using stacks of microSD cards... so then my brain decided to launch itself orthogonally, as it often does. What we end up with is wondering just how much storage you could fit into the space a 3.5" drive would normally take - not taking into account any method of attaching it at this point. So, a 3.5" drive is approx 102mm x 147mm x 26mm, a microSD card is 11mm x 15mm x 1mm, so, assuming that we're just lying them side by side, and not optimising the way they're stored at all, you can fit (with space round the sides) 9*9*26 = 2106 microSD cards in the space of a 3.5" drive. Assuming that each of those is 16G, that's just shy of 33.7TB of storage! So, that set me thinking a bit further... I reckon that in that space you could do a 6*6 grid of cards with room for connectors, and just about get it to 10 high... so, that's 360 microSD cards, and probably enough room for some control gear (haven't worked out quite what we'd use for that). I then went into wondering if we could then create a small embedded system to talk to those 360 microSD cards; if we did that then you could potentially do RAID0 across the "platters" with RAID6 on each platter. Now, to my poor head that meant that there should be 340*16G of available storage, which is 5.44TB... of course, that involves somehow interfacing the 360 microSD cards... I'm thinking that it might be possible with some form of embedded system... Unfortunately, it appears that to actually build this with consumer components... and without including the interface gear which I haven't even begun to work out yet... we're talking around about £26 per microSD card, so, erm, £9360... but I still think it'd be a really neat project... now, if someone can arrange for me to win the lottery, have lots of spare time, and some more brain power... :) Oh dear... my brain appears to have ticked further through, and I've realised that with spacing between each microSD card, you could, in theory, easily fit 400 of them upended in the space available. Erm. Of course, this still doesn't answer the "how the hell do you then get them all to talk in any sane manner" question... but I'm sure that will work out in my head sometime...
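For what it's worth, the arithmetic above checks out; here is a quick shell sketch of the same numbers (the 16G-per-card figure and the two-cards-per-platter RAID6 overhead are taken straight from the post):

cards_naive=$(( 9 * 9 * 26 ))                # microSD cards packed flat into a 3.5" bay
echo "naive packing: $cards_naive cards, $(( cards_naive * 16 )) GB raw"
platters=10; per_platter=$(( 6 * 6 ))        # 6x6 grid per "platter", 10 platters high
usable=$(( platters * (per_platter - 2) ))   # RAID6 costs two cards per platter
echo "RAID0 over $platters RAID6 platters: $(( platters * per_platter )) cards, $(( usable * 16 )) GB usable"

That prints 2106 cards and 33696 GB for the naive packing, and 360 cards with 5440 GB usable for the RAID0-over-RAID6 layout, matching the 33.7TB and 5.44TB figures above.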

9 May 2009

David Paleino: OpenPGP key transition announcement

Because of the recently announced attack against the SHA-1 digest algorithm, I finally decided to move away from my old 1024-bit DSA OpenPGP key, landing on a shiny new 4096-bit RSA one. The old key will continue to be valid for some time, but I prefer all future correspondence to come to the new one. I would also like this new key to be re-integrated into the web of trust. I'm attaching a file with the complete text of the transition, clearsigned by both keys to certify the transition. You can also find the transition statement at http://www.hanskalabs.net/key-transition_20090509.txt . You can check the validity of the file with something like:
$ gpg --decrypt key-transition_20090509.txt | gpg --decrypt
If you need to transition your key too, and you're one of my signees, then let's please coordinate, so that we have the new keys cross-signed, without spurious signatures from going-to-be-revoked keys :-) Thank you for your help.
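For anyone planning a similar transition, a minimal sketch of the mechanics (the key IDs below are hypothetical placeholders, and this is not necessarily how the author produced his statement): generate the new 4096-bit RSA key, then clearsign the statement with one key and clearsign the result with the other, which yields the nested signatures that the verification command above unwraps.

gpg --gen-key                              # pick an RSA key type and 4096 bits when prompted
# 0xOLDKEY and 0xNEWKEY are placeholder key IDs, not real ones
gpg -u 0xNEWKEY --clearsign transition-statement.txt       # produces transition-statement.txt.asc
gpg -u 0xOLDKEY --clearsign transition-statement.txt.asc   # wraps the already-signed text in a second signature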
Attachment: key-transition_20090509.txt (4.03 KB)

25 April 2009

Ingo Juergensmann: Cleaning and modding of an A3000

Finally I was able to clean up one of my m68k buildds: Spice. Actually I regularly clean up all of my machines so that they continue to work properly. Usually I do this maintenance work every year around Christmas, but this time was different. Anyway, I took some pictures:



In fact I had assumed there would be more dust inside after 512 days of uptime, but I was surprised that the dust was moderate. So the cleaning was easy and fast. This is the result:



Next and final step was to enable the Laptop mode. Sadly there was no /proc/sys/vm/laptop_mode, but the final solution wasn't that difficult either to have a portable Amiga 3000:

Amiga 3000 in Laptop mode


Well, the reason for that Laptop mode is quite simple: although the A3000 is quite silent with its Papst fan, its SCSI/SCA disks aren't. To avoid some of the noise I plan to move the A3000/autobuilder to the guest room, but when there are visitors, I want to be able to relocate the machine without interrupting the build. So attaching a UPS and a wireless bridge is the way to go for this purpose.

Russell Coker: Creating a Double-Ended Bun


The people who made the above magazine advert gave the burger bun two top halves. But I think that there is actually a demand for such buns, and that it is possible to make them! Traditional buns have a flat bottom where they rest on a baking tray. One solution to this problem would be to bake in outer space; another possible solution would be to develop a rapid baking process that allows baking in a free-fall aeroplane, but both of these would be unreasonably expensive. It seems that it would be viable to bake double-ended buns by having a rapidly rising column of hot air suspend the bun. The terminal velocity of a bun would probably not be that high (maybe 60Km/h) and it should be quite easy to have a pipe full of hot air that bakes the buns. As slight variations in the density and shape of the bun would affect the air-flow, it would be necessary to closely monitor the process and adjust the air speed to keep the bun afloat. Manufacturing cheap ovens that use LASERs to monitor the position of the bun should not be difficult. This might blow the sesame seeds off the bun, but this problem may also be solvable through careful design of the bun shape to make it less aerodynamic and by strongly attaching the seeds. I'm not sure how you would do this.

22 April 2009

Gunnar Wolf: The streets and the numbers of Cuernavaca

I usually use this blog to post about stuff I have written or that is somehow related to my work / professional life. This time, however, I'll just use it to share with you a short text my father published in the column he writes in the La Unión de Morelos newspaper, Academia de Ciencias de Morelos: La Ciencia, desde Morelos para el mundo. My father has lived for over 20 years in Cuernavaca, la Ciudad de la Eterna Primavera (the city of the eternal Springtime), ~80Km south of Mexico City. Cuernavaca was for long years known mostly as a weekend city for Mexico City's middle-upper class, but it has grown way beyond that. According to Wikipedia, there are about 700,000 inhabitants in Cuernavaca's metropolitan area. And the city is blatantly built with no planning, no urbanistic analysis of any kind. I have been familiar for many years with the program to put some rational order into Cuernavaca's streets by renumbering them. Roberto Tamariz, the sociologist who built my father's house (yes, in a stroke of genius he acted as an architect for a piece of land he had bought - the results are quite decent, given his real occupation, but his lack of architectural background tends to literally come up from the floor every now and then), was involved in a municipal project to renumber Cuernavaca's streets, probably some 15 years ago. Anyway, Roberto's work was never really finished due to the severe failings of our political culture. In this (one-and-a-third pages long, quite easy to read) text I am attaching to this post, my father writes about Cuernavaca's strange street naming system and the mathematical solutions (and political intricacies) of renumbering a two dimensional space (and, of course, he could not help but wander into three- and four-dimensional spaces, although very briefly). I like how this guy writes, all in all. Enjoy!
Attachment: Calles y numeros de Cuernavaca.pdf (231.99 KB)

Matthew Palmer: Really, Really Distributed Revision Control

While I'm not a fan of cut-n-paste coding, on the odd occasion it's handy to grab a snippet of code out of someone else's blog and plop it in. Jeff Atwood of Coding Horror has a solution to some of the downsides of ripping a bit of code from somewhere on the Internet: publishers of code snippets generate a new GUID, tag each code snippet with it, and you paste that comment in with the rest of the code when you take it. Then, if you (or someone else) need to look at the context of the chunk of code, find its author, look for improvements or commentary, or see who else has used that snippet in their own projects, you can search for that GUID and you should come up with only uses of that snippet as results.
What I propose is this:
// codesnippet:1c125546-b87c-49ff-8130-a24a3deda659
- (void)fadeOutWindow:(NSWindow*)window
{
        // code
}
Attach a one line comment convention with a new GUID to any code snippet you publish on the web. This ties the snippet of code to its author and any subsequent clones. A trivial search for the code snippet GUID would identify every other copy of the snippet on the web:
http://www.google.com/search?q=1c125546-b87c-49ff-8130-a24a3deda659
How very, very cunning. He also proposes that if you modify a snippet and republish it, you should keep the old GUID and also add one of your own, so you can track the origin and help other people find your alternate version. Attaching unique identifiers to chunks of code and sending them all around the Internet. Does this sound like distributed revision control to anyone else? A lot easier to master than Git's UI, too...
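As a trivial sketch of adopting the convention (assuming uuidgen is installed, as it is on most Linux systems), you would generate the tag line once and paste it above the snippet you publish:

echo "// codesnippet:$(uuidgen)"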

21 April 2009

Ingo Juergensmann: Finally finished!

This morning the build of the Parma Polyhedra Library (ppl) finished on my m68k buildd:

buildd@spice:~$ head logs/ppl_0.10-4_20090228-2019
Automatic build of ppl_0.10-4 on spice by sbuild/m68k 98
Build started at 20090228-2019
....
Finished at 20090421-0636
Build needed 1231:16:23, 1342752k disk space


Yes, the build needed 51 days 10:16:23 on that 68040/40, mostly because the build needed an amount of memory that exceeded the available 64 MB. Nevertheless I'm not the only one who wasn't very happy about the long build times of ppl, as bug #517659 shows. It kept me from shutting down the buildd for a long time to do the yearly clean-up of the inner parts (removing the dust), but led to a rather high uptime, as already reported. Now I can shut down the machine in the next days for the clean-up and for attaching it to a UPS - after more than 510 days!

19 April 2009

Martin F. Krafft: Extending the X keyboard map with xkb

xmodmap has long been the only way to modify the keyboard map of the X server, short of the complex configuration daemon approaches used by the large desktop managers, like KDE and GNOME. But it has always been a hack: it modifies the X keyboard map and thus requires a baseline to work from, kind of like a patch needs the correct context to be applicable. Worse yet, xmodmap weirdness required me to invoke it twice to get the effect I wanted. When the recent upgrade to X.org 7.4 broke larger parts of my elaborate xmodmap configuration, I took the time to finally ditch xmodmap and implement my modifications as proper xkb configuration.

Background information

I had tried before to use per-user xkb configuration, but could not find the answers I wanted. It was somewhat by chance that I found Doug Palmer's Unreliable Guide to XKB Configuration at the same time that Julien Cristau and Matthew W. S. Bell provided me with the necessary hints on the #xorg/irc.freenode.org IRC channel to get me started. The other resource worth mentioning is Ivan Pascal's collection of XKB documents, which were instrumental in my gaining an understanding of xkb. And just as I am writing this document, Debian's X Strike Force have published their Input Hotplug Guide, which is a nice complement to this very document you are reading right now, since it focuses on auto-configuration of xkb with HAL. The default xkb configuration comes with a lot of flexibility, and often you don't need anything else. But when you do, then this is how to do it:

Installing a new keyboard map

The most basic way to install a new keyboard map is using xkbcomp, which can also be used to dump the currently installed map into a file. So, to get a bit of an idea of what we'll be dealing with, please run the following commands:
xkbcomp $DISPLAY xkb.dump
editor xkb.dump
xkbcomp xkb.dump $DISPLAY

The file is complex and large, and it completely went against my aesthetics to simply edit it to have xkb work according to my needs. I sought a way in which I could use as much as possible of the default configuration, and only put self-contained additional snippets in place to do the things I wanted done differently.

setxkbmap and rule files

Thus began my voyage into the domain of rule files. But before we dive into those, let's take a look at setxkbmap. Despite the trivial invocation of e.g. setxkbmap us to install a standard US-American keyboard map, the command also takes arguments. More specifically, it allows you to specify the following high-level parameters, which determine the sequence of events between a key press and an application receiving a KeyPress event:
  • Model: the keyboard model, which defines which keys are where
  • Layout: the keyboard layout, which defines what the keys actually are
  • Variant: slight variations in the layout
  • Options: configurable aspects of keyboard features and possibilities
Thus, with the following command line, I would select a US layout with international (dead) keys for my Thinkpad keyboard, and switch to an alternate symbol group with the windows keys (more on that later):
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch

In many cases, between all combinations of the aforementioned parameters, this is all you ever need. But I wanted more. If you append -print to the above command, it will print the keymap it would install, rather than installing it:
% setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch -print
xkb_keymap {
  xkb_keycodes    { include "evdev+aliases(qwerty)" };
  xkb_types       { include "complete" };
  xkb_compat      { include "complete" };
  xkb_symbols     { include "pc+us(intl)+inet(evdev)+group(win_switch)" };
  xkb_geometry    { include "thinkpad(us)" };
};

There are two things to note:
  1. The -option grp:win_switch argument has been turned into an additional include group(win_switch) on the xkb_symbols line, just like the model, layout, and variant are responsible for other aspects in the output.
  2. The output seems related to what xkbcomp dumped into the xkb.dump file we created earlier. Upon closer inspection, it turns out that the dump file is simply a pre-processed version of the keyboard map, with include instructions exploded.
At this point, it became clear to me that this was the correct way forward, and I started to investigate those points in order. The translation from parameters to an xkb_keymap stanza by setxkbmap is actually governed by a rule file. A rule is nothing more than a set of criteria, and what setxkbmap should do in case they all match. On a Debian system, you can find this file in /usr/share/X11/xkb/rules/evdev, and /usr/share/X11/xkb/rules/evdev.lst is a listing of all available parameter values. The xkb_symbols include line in the above xkb_keymap output is the result of the following rules in the first file, which setxkbmap had matched (from top to bottom) and processed:
! model         layout              =       symbols
  [...]
  *             *                   =       pc+%l(%v)
! model                             =       symbols
  *                                 =       +inet(evdev)
! option                            =       symbols
  [...]
  grp:win_switch                    =       +group(win_switch)

It should now not be hard to deduce the xkb_symbols include line quoted above, starting from the setxkbmap command line. I'll reproduce both here for convenience:
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch
xkb_symbols     include "pc+us(intl)+inet(evdev)+group(win_switch)"    ;

A short note about the syntax here: group(win_switch) in the symbols column simply references the xkb_symbols stanza named win_switch in the symbols file group (/usr/share/X11/xkb/symbols/group). Thus, the rules file maps parameters to sets of snippets to include, and the output of setxkbmap applies those rules to create the xkb_keymap output, to be processed by xkbcomp (which setxkbmap invokes implicitly, unless the -print argument was given on invocation). It seems that for a criterion (option, model, layout, ...) to be honoured, it has to appear in the corresponding listing file, evdev.lst in this case. There is also evdev.xml, but I couldn't figure out its role.

Attaching symbols to keys

I ended up creating a symbols file of reasonable size, which I won't discuss here. Instead, let's solve the following two tasks for the purpose of this document:
  1. Make the Win-Hyphen key combination generate an en dash (–), and Win-Shift-Hyphen an em dash (—).
  2. Let the Caps Lock key generate Mod4, which can be used e.g. to control the window manager.
To approach these two tasks, let's create a symbols file in ~/.xkb/symbols/xkbtest and add two stanzas to it:
partial alphanumeric_keys
xkb_symbols "dashes" {
  key <AE11> {
    symbols[Group2] = [ endash, emdash ]
  };
};
partial modifier_keys
xkb_symbols "caps_mod4" {
  replace key <CAPS> {
    [ VoidSymbol, VoidSymbol ]
  };
  modifier_map Mod4 { <CAPS> };
};

Now let me explain these in turn:
  1. We used the option grp:win_switch earlier, which told xkb that we would like to use the windows keys to switch to group 2. In the custom symbols file, we now simply define the symbols to be generated for each key, when the second group has been selected. Key <AE11> is the hyphen key. To find out the names of all the other keys on your keyboard, you can use the following command:
    xkbprint -label name $DISPLAY - | gv -orientation=seascape -
    
    
    I had to declare the stanza partial because it is not a complete keyboard map, but can only be used to augment/modify other maps. I also declared it alphanumeric_keys to tell xkb that I would be modifying alphanumeric keys inside it. If I also wanted to change modifier keys, I would also specify modifier_keys. The rest should be straight-forward. You can get the names of available symbols from keysymdef.h (/usr/include/X11/keysymdef.h on a Debian system, package x11proto-core-dev), stripping the XK_ prefix.
  2. The second stanza replaces the Caps Lock key definition and prevents it from generating symbols (VoidSymbol). The important aspect of the second stanza is the modifier_map instruction, which causes the key to generate the Mod4 modifier event, which I can later use to bind key combinations for my window manager (awesome).
The easiest way to verify those changes is to put the setxkbmap -print output of the keyboard map you would like to use as a baseline into ~/.xkb/keymap/xkbtest, and append snippets to be included to the xkb_symbols line, e.g.:
"pc+us(intl)+inet(evdev)+group(win_switch)+xkbtest(dashes)+xkbtest(caps_mod4)"

When you try to load this keyboard map with xkbcomp, it will fail because it cannot find the xkbtest symbol definition file. You have to let the tool know where to look, by appending a path to its search list (note the use of $HOME instead of ~, which the shell would not expand):
xkbcomp -I$HOME/.xkb ~/.xkb/keymap/xkbtest $DISPLAY

You can use xev to verify the results, or just type Win-Hyphen into a terminal; does it produce an en dash? By the way, I found xev much more useful for such purposes when invoked as follows (thanks to Penny for the idea):
xev | sed -ne '/^KeyPress/,/^$/p'

Unfortunately, xev does not give any indication of which modifier symbols are generated. I have found no other way to verify the outcome than to tell my window manager to do something in response to e.g. Mod4-Enter, reload it, and then try it out.

Rules again, and why I did not use them in the end

Once I got this far, I proceeded to add option-to-symbol-snippet mappings to the rules file, and added each option to the listing file too. A few bugs later (Debian bug #524512), I finally had setxkbmap spit out the right xkb_keymap and could install the new keyboard map with xkbcomp, like so:
setxkbmap -I$HOME/.xkb [...] -print | xkbcomp -I$HOME/.xkb - :0

I wrote a small script to automatically do that at the start of the X session and could have gone out to play, if it hadn't been for the itch I felt due to the entire rule file stored in my configuration. I certainly did not like that, but I could also not find a way to extend a rule file with additional rules. When I looked at the aforementioned script again, it suddenly became obvious that I was going a far longer path than I had to. Even though the rule system is powerful and allows me to e.g. automatically include symbol maps to remap keys on my Thinkpad, based on the keyboard model I configured, the benefit (if any) did not justify the additional complexity. In the end, I simplified the script that loads the keyboard map, and defined a default xkb_keymap, as well as one for the Thinkpad, which I identify by its fully-qualified hostname. If a specific file is available for a given host, it is used. Otherwise, the script uses the default.
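For illustration, a minimal sketch of what such a session script could look like (the ~/.xkb file layout and the hostname-based file naming are assumptions based on the description above, not the author's actual script):

#!/bin/sh
# Load a per-host xkb keymap if one exists, otherwise fall back to the default.
XKBDIR="$HOME/.xkb"
keymap="$XKBDIR/keymap/$(hostname -f)"
[ -f "$keymap" ] || keymap="$XKBDIR/keymap/default"
xkbcomp -I"$XKBDIR" "$keymap" "$DISPLAY"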

10 April 2009

Jurij Smakov: Series of unfortunate events

While I did not think I would continue hacking on VoiDroid after achieving a semi-broken proof-of-concept implementation, it seems that I can't get out of it easily :-). I spent today trying to implement a solution which would allow the native C++ code to call into Java - for example, if we have a call in progress and want to update the call status displayed to the user when it changes. The JNI library has some native callbacks which get called on call state changes, but getting the information to Java turned out to be quite an experience which, I thought, would be amusing to document and, hopefully, save someone some quality bang-head-on-desk time.
1. Some of the threads calling native callbacks were initiated in Java (and thus known to the JVM), and some were not; they were started in the native code. As the JVM did not know anything about them, JNI calls would fail without much explanation. Eventually I figured that out, and started attaching them as appropriate (JNI provides facilities for that).
2. Even after attaching, I could not resolve references to my classes from the native code, until I found a mysterious line in the JNI docs: "When a thread is attached to the VM, the context class loader is the bootstrap loader." Basically, it means that when you attach a native thread, it is only capable of resolving built-in Java classes.
3. I later found a reference to a technique which suggests caching a good classloader from a Java-initiated thread, and later setting the classloader for the attached thread to it. I implemented that, only to find out that Android's JVM is not your regular JVM, so things are slightly different here, and this technique does not work.
4. Luckily, there is an incredibly useful post describing the problem and suggesting a workaround: caching an instance of the needed class globally in native code, after calling NewGlobalRef on it, to avoid garbage collection on exit from a native function. I did that, however I only had access to the class, and Android's JNI only supports calling NewGlobalRef on an object, so I had to create a dummy object in the native code (in the JNI_OnLoad() function, actually) and cache that.
5. That finally enabled me to call into Java from native, attached threads (w00t!). Unfortunately, you are not allowed to modify the UI (exactly what I wanted to do from my Java callback!) from a non-UI thread, which brought me right back to the drawing board.
6. As callbacks initiated from native code would not work for this reason, it was suggested to me to use a classic producer-consumer mechanism, having the Java call into the native library block until new information gets delivered by a native thread. Easily done with some pthread mutexes, and the goal is close, right?
7. Wrong! Now the problem was that if I started a UI Java thread which would call the native function in a loop to retrieve the data (it would not actually spin, as the native function blocks until new data is available), then it would be declared not responding by the UI framework within a few seconds. I could start a non-UI thread for that, but you guessed it: a non-UI thread is not allowed to update the UI! Instead of resolving the problem we had just shifted it into the Java land.
8. While I was simply thinking about stuffing the result into a class variable and passing it to the UI thread this way, this tutorial, showing how to display a progress indicator while doing some heavy calculations in the background, provided a more elegant idea.
One can use Android's android.os.Handler class to pass messages (with arbitrary payload) between threads. That was the last piece of the puzzle, which allowed me to finally achieve a working implementation (some 10 hours or so later :-).

Previous.